This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[Numpy]flip #15819

Merged 3 commits into apache:master on Sep 22, 2019

Conversation

tingying2020 (Contributor)

Description

Implements the NumPy-compatible operator `flip`.
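
A short sketch of the intended behaviour, using plain NumPy as the reference (the PR's tests compare mxnet's np.flip against _np.flip in the same way):

import numpy as _np

x = _np.arange(6).reshape(2, 3)   # [[0 1 2], [3 4 5]]
_np.flip(x, axis=0)               # rows reversed:    [[3 4 5], [0 1 2]]
_np.flip(x, axis=1)               # columns reversed: [[2 1 0], [5 4 3]]
_np.flip(x)                       # axis=None reverses every axis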

@tingying2020 tingying2020 requested a review from szha as a code owner August 9, 2019 07:25
@yzhliu yzhliu added the Numpy label Aug 9, 2019
@@ -0,0 +1,67 @@
/*
Contributor

Move the code to the existing np_matrix_op* files.

Contributor Author

Done.

rtol=1e-3
atol=1e-5
for shape in shapes1:
for axis in [None]:
Contributor

If you have only one value to test, don't use a for-loop; just do:

for shape in shapes1:
    axis = None
    # your test code

Contributor Author

Fixed, thank you.

class TestFlip(HybridBlock):
def __init__(self, axis):
super(TestFlip, self).__init__()
# Necessary initialization
Contributor

Get rid of this line.

Contributor Author

Fixed, thank you.
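
For context, a sketch of the test harness assembled from the hunks quoted in this review, with the flagged comment removed (the rest of the file is assumed):

from mxnet.gluon import HybridBlock

class TestFlip(HybridBlock):
    def __init__(self, axis):
        super(TestFlip, self).__init__()
        self.axis = axis

    def hybrid_forward(self, F, x):
        # F resolves to mx.nd or mx.sym depending on hybridization
        return F.np.flip(x, self.axis)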

for oneType in types:
if oneType is 'float16':
rtol=1e-2
atol=1e-2
Contributor

Why do you need a higher tolerance for the float16 type? flip is a kind of copy, so there should be no numerical-instability issues.
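
To illustrate the point, a minimal check in plain NumPy (array contents arbitrary): flip only moves elements, so even float16 results match the reference exactly and the default tolerances suffice.

import numpy as _np

x = _np.random.uniform(-1, 1, (2, 3)).astype(_np.float16)
# flip performs no arithmetic; every output element is an exact copy
assert _np.array_equal(_np.flip(x, axis=0), x[::-1, :])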

Contributor Author

Fixed, thank you.


@with_seed()
@use_np
def test_np_flip():
Contributor

You do not need to add the same test again in test_operator_gpu.py; there's an import at the top of that file already. Delete this function.
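
For reference, the usual pattern at the top of test_operator_gpu.py (exact module name assumed): the wildcard import already re-runs the CPU tests under the GPU context, so redefining test_np_flip here would only shadow the imported one.

# in tests/python/gpu/test_operator_gpu.py (path assumed)
from test_numpy_op import *  # pulls in test_np_flip and the other CPU tests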

Contributor Author

Done, thank you.

return F.np.flip(x, self.axis)

shapes1 = [(1, 2, 3), (1, 0), (3, 0, 2), (), (1,)]
shapes2 = [(2, 2, 3, 3), (1, 1, 2, 4)]

Contributor

Can we merge the test cases of shapes1 and shapes2 into one for loop?
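
A sketch of what the merge could look like (test body elided; the axis choices are derived from each shape's ndim instead of being hard-coded per list):

shapes = [(1, 2, 3), (1, 0), (3, 0, 2), (), (1,), (2, 2, 3, 3), (1, 1, 2, 4)]
for shape in shapes:
    for axis in [None] + list(range(len(shape))):
        ...  # build an input of this shape and compare np.flip with _np.flip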

def hybrid_forward(self, F, x):
return F.np.flip(x, self.axis)

shapes1 = [(1, 2, 3), (1, 0), (3, 0, 2), (), (1,)]
Contributor

shapes = [(1, 2, 3), (1, 0), ()]

mx_out = np.flip(x, axis)
np_out = _np.flip(x.asnumpy(), axis)
assert_almost_equal(mx_out.asnumpy(), np_out, rtol=rtol, atol=atol)
for shape in shapes2:
Contributor

Then you can get rid of this loop.

if (inputs[0].shape_.ndim() == 0) {
mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
mxnet_op::Kernel<flip0dim_shared_kernel, xpu>::Launch(s, inputs[0].Size(),
Contributor

Please call the copy function directly.
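
For intuition, in plain NumPy terms (the actual fix is in the C++ operator): a 0-dim array has no axes to reverse, so flip degenerates to a copy and needs no dedicated kernel.

import numpy as _np

x = _np.array(3.14)                   # 0-dim array
assert _np.flip(x, axis=None) == x    # flipping over zero axes is the identity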

Contributor Author

Done.

}
};

struct flip0dim_shared_kernel {
Contributor

Get rid of this kernel.

Contributor Author

Done.

@haojin2 (Contributor) left a comment

LGTM

Ying and others added 2 commits September 22, 2019 10:46
* Implement flip

* fix some bug and add gpu test

* register param and edit test

* add testcase for backward

* remove print

* optimize 0-dim and 0-shape

* adjust format and add doc in _symbol.py

* fix bug in symbol

* add flip in __all__

* fix format error

* import ndarray

* move flip implementation to np_matrix_op and remove test in gpu

* delete redundant blank line

* fix error in review

* remove **kwargs and change copy

* fix error in review
@reminisce reminisce merged commit dccfc88 into apache:master Sep 22, 2019
drivanov pushed a commit to drivanov/incubator-mxnet that referenced this pull request Sep 26, 2019
larroy pushed a commit to larroy/mxnet that referenced this pull request Sep 28, 2019